Model-based reinforcement learning (MBRL) methods achieve significant sample efficiency in many tasks, but their performance is often limited by the presence of model error. To reduce model error, previous works use a single well-designed network to fit the entire environment dynamics, treating the environment dynamics as a black box. However, these methods overlook environment decomposability: the dynamics may consist of multiple sub-dynamics that can be modeled separately, allowing the world model to be constructed more accurately. In this paper, we propose Environment Dynamics Decomposition (ED2), a novel world-model construction framework that models the environment in a decomposed manner. ED2 contains two key components: sub-dynamics discovery (SD2) and dynamics decomposition prediction (D2P). SD2 discovers the sub-dynamics in the environment, and D2P then constructs the decomposed world model following the discovered sub-dynamics. ED2 can be easily combined with existing MBRL algorithms, and empirical results show that ED2 significantly reduces model error and improves the performance of state-of-the-art MBRL algorithms on various tasks.
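The abstract does not include code, but a minimal sketch can illustrate the general idea of a decomposed world model: one sub-model per group of state dimensions, with the group predictions combined into the full next-state prediction. The class names, network sizes, and hand-picked dimension groups below are hypothetical assumptions for illustration and are not taken from the ED2 paper.

```python
# Minimal sketch of a decomposed dynamics model (hypothetical, not the ED2 code).
# Each sub-model predicts the next values of one group of state dimensions;
# the full next-state prediction is assembled from the group predictions.
import torch
import torch.nn as nn

class SubDynamicsModel(nn.Module):
    """Predicts one group of state dimensions from the full (state, action) pair."""
    def __init__(self, state_dim, action_dim, group_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, group_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

class DecomposedWorldModel(nn.Module):
    """Combines one sub-model per discovered group of state dimensions."""
    def __init__(self, state_dim, action_dim, groups):
        super().__init__()
        self.groups = groups  # e.g. [[0, 1], [2, 3, 4]] from a discovery step
        self.sub_models = nn.ModuleList(
            SubDynamicsModel(state_dim, action_dim, len(g)) for g in groups
        )

    def forward(self, state, action):
        next_state = torch.zeros_like(state)
        for group, model in zip(self.groups, self.sub_models):
            next_state[..., group] = model(state, action)
        return next_state

# Usage: two hand-picked groups over a 5-dimensional state and 2-dimensional action.
model = DecomposedWorldModel(state_dim=5, action_dim=2, groups=[[0, 1], [2, 3, 4]])
pred = model(torch.randn(8, 5), torch.randn(8, 2))  # (8, 5) next-state prediction
```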
With the increasing growth of information through smart devices, raising the quality of human life requires the presentation of various computational paradigms, including the Internet of Things, fog, and cloud. Among these three paradigms, the cloud computing paradigm, as an emerging technology, adds cloud-layer services to the edge of the network so that resource allocation occurs close to the end user, reducing resource processing time and network traffic overhead. Hence, the resource allocation problem faced by providers, namely presenting a suitable platform using these computational paradigms, is considered a challenge. In general, resource allocation approaches are divided into two categories: auction-based methods (whose goals are to increase profits for service providers and to improve user satisfaction and usability) and optimization-based methods (targeting energy, cost, network exploitation, runtime, and reduction of time delay). In this paper, according to the latest scientific achievements, a comprehensive literature study (CLS) is provided on artificial intelligence methods for optimization-based resource allocation, without considering auction-based methods, in various computing environments such as cloud computing, vehicular fog computing, wireless networks, IoT, vehicular networks, 5G networks, vehicular cloud architecture, machine-to-machine (M2M) communication, Train-to-Train (T2T) communication networks, and Peer-to-Peer (P2P) networks. Since deep learning methods based on artificial intelligence are among the most important methods for resource allocation problems, this paper also covers learning-based resource allocation approaches in the mentioned computational environments, including deep reinforcement learning, the Q-learning technique, reinforcement learning, and online learning, as well as classical learning methods such as Bayesian learning, Cummins clustering, and Markov decision processes.
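As a concrete illustration of one of the surveyed techniques, below is a minimal tabular Q-learning sketch for a toy resource-allocation setting (choosing which server receives an incoming task). The environment, reward definition, and all names are hypothetical assumptions introduced for illustration; they are not taken from any specific surveyed work.

```python
# Minimal tabular Q-learning sketch for a toy resource-allocation problem
# (hypothetical illustration, not the setup of any specific surveyed paper).
import random
from collections import defaultdict

N_SERVERS = 3              # action = index of the server that receives the task

def step(state, action):
    """Assign a task to server `action`; reward penalizes loading a busy server."""
    loads = list(state)
    reward = -loads[action]                     # cheaper to use a lightly loaded server
    loads[action] = min(loads[action] + 1, 4)   # chosen server gets busier
    idle = random.randrange(N_SERVERS)          # one random server finishes a task
    loads[idle] = max(loads[idle] - 1, 0)
    return tuple(loads), reward

Q = defaultdict(float)     # Q[(state, action)] -> estimated return
alpha, gamma, eps = 0.1, 0.9, 0.1

state = (0, 0, 0)
for _ in range(50_000):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.randrange(N_SERVERS)
    else:
        action = max(range(N_SERVERS), key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Q-learning update: bootstrap from the best action in the next state
    best_next = max(Q[(next_state, a)] for a in range(N_SERVERS))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print("Greedy choice when loads are (3, 0, 2):",
      max(range(N_SERVERS), key=lambda a: Q[((3, 0, 2), a)]))
```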